proble[f83,jmc] Good problems in AI
1. An important characteristic of a problem is the extent to which it
admits a description in logic from which the solution follows. The
most non-logical problems are those that involve continuously
viewing physical objects, predicting how our actions will affect
them, and controlling those actions. Perhaps we can say that whenever
deciding what to do requires keeping one's eyes open, the
problem has an essential non-logical component.
The problem with physical object problems is that the reasoning
is not done in anything like logic (or so it seems), and it isn't
clear that one could make a computer do it in logic.
2. At the opposite pole from physical problems are purely symbolic ones.
We leave mathematical problems aside, because we want to look
at some new ground. Consider therefore a problem involving buying
and selling or winning votes for one's side of an issue. We can
imagine that written messages, even teletype messages, are being
exchanged. Is the reasoning then logical? At least partly, but
until it is formalized, we can't be sure.
3. Consider history, say political or military, and also stories
including novels, short stories and opera plots. To what extent
do the actions taken follow as logical consequences of the goals
of the individuals concerned? To what extent are they predictable
at all? The experiment would be to ask a person to read up to
a certain point and then answer questions about what might happen
next. Some of the questions would ask for a flat prediction of
the rational action for one of the parties, given his goals.
Other questions might merely try to have the reader exclude actions
that were not taken, while not excluding the action that was taken.
Once human performance is calibrated, we can try for formal
descriptions and programs for answering the questions. Logic succeeds
if our program does as well as the humans. If we fail,
we can ask what went wrong and what additional mechanisms are needed.
4. common[f83,jmc] has material about the non-logical aspects of
"natural language reasoning" that we won't repeat here.
5. Preventing the dogs from knocking over the trash can.
Getting a hotel reservation in Boston from Kennedy Airport
when the phone strike prevents a direct call from a pay booth.
6. Block stacking is often taken as a paradigmatic problem solving
problem, but I don't see anyone taking advantage of the general
feature of block stacking that makes it easy for humans. Namely,
if we build the desired towers from the bottom up, i.e. order our
goals in that way, no goal ever has to be undone once it is achieved.
Therefore, it is easy to build a program for block stacking.
This is a general feature of many problem solving situations:
conjunctive goals can be taken in an order that precludes (more
generally, almost precludes) ever having to take back any goals.
It would be interesting to try to write a program that will look
for this order in a particular problem. Thus, given some description
of the general block stacking problem, it would determine that
building the towers from the bottom up would constitute irrevocable
progress.
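
A minimal sketch of such a bottom-up stacker follows, assuming a simple
representation in which each block is mapped to whatever it rests on
("table" or another block) and each goal tower is listed from bottom to
top. The names plan_stacking, clear and above, and the example
configuration, are illustrative assumptions rather than anything fixed
by this note.

    def above(state, support):
        """Return the block resting directly on `support`, or None if it is clear."""
        for block, below in state.items():
            if below == support:
                return block
        return None

    def clear(state, block, plan):
        """Move everything stacked above `block` onto the table, recording the moves."""
        top = above(state, block)
        if top is not None:
            clear(state, top, plan)
            plan.append((top, "table"))
            state[top] = "table"

    def plan_stacking(initial, goal_towers):
        """Return a list of moves (block, destination) that builds the goal towers."""
        state = dict(initial)
        plan = []
        for tower in goal_towers:
            support = "table"
            for block in tower:               # take the subgoals bottom up
                if state[block] != support:   # block not yet on its final support
                    clear(state, block, plan)
                    if support != "table":
                        clear(state, support, plan)
                    plan.append((block, support))
                    state[block] = support
                support = block               # once placed, this block is never moved again
        return plan

    # Example: C starts on A, A and B are on the table; we want the tower A, B, C.
    start = {"A": "table", "B": "table", "C": "A"}
    print(plan_stacking(start, [["A", "B", "C"]]))
    # prints [('C', 'table'), ('B', 'A'), ('C', 'B')]

Because the subgoals of each tower are taken bottom up, a block is moved
onto its final support only after that support is already in place, so no
achieved subgoal is ever undone.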